perf: cadence batch repair for large construction #369
Conversation
acgetchell
commented
May 9, 2026
- Add a batch repair policy option to ConstructionOptions so bulk insertion can repair local Delaunay fronts every N insertions.
- Carry pending local repair seeds through batch construction and finalization so cadenced repairs cover the accumulated frontier.
- Report construction telemetry for locate walks, conflict regions, global exterior scans, skipped-vertex budgets, and initial-simplex A/B runs in the large-scale debug harness.
Walkthrough
Adds per-insertion and construction telemetry, threads telemetry through insertion/retries, introduces locality helpers and ValidationCadence, implements cadence-driven pending local repairs for batch construction, refactors flips for timing/diagnostics, and updates the triangulation facade, prelude exports, docs, tests, benches, and debug harness.
Changes
Telemetry, Locality, and Batch Repair
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Codacy: Up to standards ✅

🟢 Issues

| Metric | Results |
|---|---|
| Complexity | 161 |

🟢 Coverage: 87.23% diff coverage · -0.27% coverage variation

| Metric | Results |
|---|---|
| Coverage variation | ✅ -0.27% coverage variation (-1.00%) |
| Diff coverage | ✅ 87.23% diff coverage |

Coverage variation details

| | Coverable lines | Covered lines | Coverage |
|---|---|---|---|
| Common ancestor commit (aa0de2c) | 53884 | 48328 | 89.69% |
| Head commit (731444d) | 57156 (+3272) | 51109 (+2781) | 89.42% (-0.27%) |

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: `<coverage of head commit> - <coverage of common ancestor commit>`

Diff coverage details

| | Coverable lines | Covered lines | Diff coverage |
|---|---|---|---|
| Pull request (#369) | 4315 | 3764 | 87.23% |

Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: `<covered lines added or modified> / <coverable lines added or modified> * 100%`
Actionable comments posted: 3
🧹 Nitpick comments (4)
src/core/operations.rs (1)
233-274: 💤 Low value
Consider using `u64` for nanosecond timing.

The `global_conflict_scan_nanos` field uses `u128`, but the documentation states this struct captures telemetry for "one transactional insertion." For single-insertion timing, `u64` provides ample range (~584 years of nanoseconds) and matches `std::time::Duration`'s internal representation. Using `u64` would be simpler and save 8 bytes per instance.

♻️ Suggested change
```diff
- /// Wall-clock nanoseconds spent in global exterior-point conflict scans.
- pub global_conflict_scan_nanos: u128,
+ /// Wall-clock nanoseconds spent in global exterior-point conflict scans.
+ pub global_conflict_scan_nanos: u64,
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/core/operations.rs` around lines 233 - 274: the field global_conflict_scan_nanos in struct InsertionTelemetry is using u128 but should be u64 to match Duration semantics and save space; change the type of InsertionTelemetry::global_conflict_scan_nanos from u128 to u64, update any code that assigns, accumulates, or converts this field (search for references to InsertionTelemetry and global_conflict_scan_nanos, as well as any add/extend/Duration::as_nanos() uses) to cast or use as u64 safely, and update any tests or Default/serialization code that assume u128 accordingly.

src/triangulation/delaunay.rs (1)
7956-7969: ⚡ Quick win
Add one end-to-end test for the new `EveryN` cadence path.

The new tests only validate the builder/getter. They do not exercise the carried-frontier behavior in `pending_repair_seeds`/`finalize_bulk_construction`, so a regression in the actual cadenced repair path would still pass.

As per coding guidelines, "Unit tests must cover known values, error paths, and dimension-generic correctness."
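Pending such an end-to-end test, the cadence decision itself is easy to pin down in isolation. A minimal sketch, assuming a policy enum shaped like the PR's `DelaunayRepairPolicy` (names and variants here are illustrative, not the crate's actual API):

```rust
/// Illustrative model of a repair cadence policy; variant names mirror the
/// PR's `DelaunayRepairPolicy`, but this is a sketch, not the crate's code.
#[derive(Clone, Copy)]
enum RepairPolicy {
    EveryInsertion,
    EveryN(usize),
    Never,
}

/// Decide whether a cadenced repair fires after `insertions_done` insertions.
fn should_repair(policy: RepairPolicy, insertions_done: usize) -> bool {
    match policy {
        RepairPolicy::EveryInsertion => true,
        // Guard n == 0 so a misconfigured cadence cannot divide by zero.
        RepairPolicy::EveryN(n) => n > 0 && insertions_done % n == 0,
        RepairPolicy::Never => false,
    }
}

fn main() {
    let policy = RepairPolicy::EveryN(4);
    let fired: Vec<usize> = (1..=10).filter(|&i| should_repair(policy, i)).collect();
    // With EveryN(4), repairs fire after the 4th and 8th insertions.
    assert_eq!(fired, vec![4, 8]);
    println!("ok");
}
```

An end-to-end test would assert this same schedule through the real bulk path, e.g. that `pending_repair_seeds` is drained at each firing point before `finalize_bulk_construction`.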
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/triangulation/delaunay.rs` around lines 7956 - 7969: the test currently only verifies ConstructionOptions getters; add an end-to-end unit test (or extend test_construction_options_builder_roundtrip) that uses ConstructionOptions::with_batch_repair_policy(DelaunayRepairPolicy::EveryN(...)) and drives a bulk insertion path so the cadenced repair branch is executed: construct a builder/triangulation, call the bulk insertion routine that leads to pending_repair_seeds and then call finalize_bulk_construction, and assert that repairs occurred at the expected cadence (e.g., by checking pending_repair_seeds is drained, internal repair count/state, or final triangulation invariants). Ensure the test references ConstructionOptions, DelaunayRepairPolicy::EveryN, pending_repair_seeds, and finalize_bulk_construction so the EveryN path is actually exercised rather than only testing getters.

tests/large_scale_debug.rs (2)
1180-1206: ⚡ Quick win
Add a `total == 0` case to the skip-percentage test.

`skip_percentage(x, 0)` is the one branch with explicit short-circuit logic and is currently unexercised by `test_skip_percentage_reports_ratio`. A single extra assertion locks in the divide-by-zero contract.

🧪 Suggested addition
```diff
 #[test]
 fn test_skip_percentage_reports_ratio() {
     assert!((skip_percentage(0, 100) - 0.0).abs() < f64::EPSILON);
     assert!((skip_percentage(4, 400) - 1.0).abs() < f64::EPSILON);
     assert!((skip_percentage(12, 100) - 12.0).abs() < f64::EPSILON);
+    // total == 0 must short-circuit and not divide by zero.
+    assert!((skip_percentage(0, 0) - 0.0).abs() < f64::EPSILON);
+    assert!((skip_percentage(5, 0) - 0.0).abs() < f64::EPSILON);
 }
```

Optional: `test_initial_simplex_strategy_from_name_maps_ab_switch` also exercises `initial_simplex_strategy_name` at line 1196; splitting into two tests would make failures point at the right helper, but that's purely cosmetic.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/large_scale_debug.rs` around lines 1180 - 1206, Add an assertion to test_skip_percentage_reports_ratio that exercises the total == 0 short-circuit in skip_percentage; e.g., assert that skip_percentage(0, 0) (or skip_percentage(any, 0)) equals 0.0 using the same f64::EPSILON comparison pattern so the divide-by-zero branch is covered.
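For reference, a plausible reconstruction of the helper under test, showing the `total == 0` short-circuit the extra assertions would lock in (the real `skip_percentage` lives in `tests/large_scale_debug.rs`; this sketch only assumes the behavior observed in the existing test):

```rust
/// Assumed shape of the harness helper: percentage of skipped vertices,
/// returning 0.0 when no vertices were attempted at all.
fn skip_percentage(skipped: usize, total: usize) -> f64 {
    if total == 0 {
        // Short-circuit: avoid dividing by zero when nothing was attempted.
        return 0.0;
    }
    (skipped as f64 / total as f64) * 100.0
}

fn main() {
    // Matches the existing test's expectation: 4/400 -> 1.0%.
    assert!((skip_percentage(4, 400) - 1.0).abs() < f64::EPSILON);
    // The branch the suggested assertions would cover.
    assert!((skip_percentage(0, 0) - 0.0).abs() < f64::EPSILON);
    assert!((skip_percentage(5, 0) - 0.0).abs() < f64::EPSILON);
    println!("ok");
}
```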
346-377: 💤 Low value
The `#[non_exhaustive]` enum makes the wildcard in `initial_simplex_strategy_name` necessary, but it suggests adding logging when unknown variants are encountered.

`InitialSimplexStrategy` is marked `#[non_exhaustive]`, so the wildcard arm is not dead code: it will handle future variants. However, a debug harness should surface when unknown variants appear. Consider adding a `debug!` or `warn!` tracing statement in the wildcard arm to catch future additions early (note this requires dropping `const`, since macros cannot be invoked in a `const fn`):

```rust
fn initial_simplex_strategy_name(strategy: InitialSimplexStrategy) -> &'static str {
    match strategy {
        InitialSimplexStrategy::First => "first",
        InitialSimplexStrategy::Balanced => "balanced",
        _ => {
            debug!("unknown InitialSimplexStrategy variant");
            "unknown"
        }
    }
}
```

The asymmetry across the three helpers is reasonable: `name()` converts to a display string, `from_name()` parses user input with a `None` fallback, and `from_env()` panics on invalid configuration. Explicit logging makes the behavior intentional and catches variants that slip through.
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/large_scale_debug.rs` around lines 346 - 377, The wildcard arm in initial_simplex_strategy_name should log unexpected #[non_exhaustive] variants so future additions are visible; change initial_simplex_strategy_name from const fn to a normal fn (so logging macros can be used) and add a tracing::debug! (or tracing::warn!) call in the `_` arm that includes the strategy (or its discriminant/string) before returning "unknown"; keep initial_simplex_strategy_from_name and initial_simplex_strategy_from_env behavior unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/core/triangulation.rs`:
- Around line 118-120: The import list brings in locate but it's only used in a
debug-only block; wrap the locate import with a debug_assertions cfg so release
builds don't show it as unused (e.g., conditionally import locate when
cfg(debug_assertions)). Keep the other imports (ConflictError, LocateError,
LocateResult, LocateStats, extract_cavity_boundary, find_conflict_region,
locate_by_scan, locate_with_stats) unchanged and only apply the cfg to the
locate symbol referenced in the debug block around the locate usage.
- Around line 5213-5245: The locate_by_scan() retry path currently bypasses the
shared telemetry updater, so increment the scan-fallback counter and record the
locate outcome there: in the code path that calls locate_by_scan() as a retry
(the fallback branch that invokes locate_by_scan()), add
telemetry.locate_scan_fallbacks =
telemetry.locate_scan_fallbacks.saturating_add(1) and then call the common
recorder (e.g., record_locate_telemetry or the helper used elsewhere such as
record_locate_outcome) with the LocateResult and LocateStats from that retry so
locate_calls, locate_walk_steps_*, locate_hint_uses and located_* are accounted
for the scan-retry case as well. Ensure you pass the same InsertionTelemetry,
LocateResult and LocateStats types used by record_locate_telemetry.
In `@src/triangulation/delaunay.rs`:
- Around line 4706-4718: The current match always maps any error from
repair_delaunay_local_single_pass into
TriangulationConstructionError::GeometricDegeneracy; instead, preserve and
propagate internal repair errors untouched so non-retryable failures fail fast.
Update the match on repair_result to: on Ok(()) return Ok(()); on Err(e) if e is
already a DelaunayTriangulationConstructionError (or can be converted from the
underlying repair error) propagate/return that error directly; otherwise wrap
only true geometric degeneracy cases in
TriangulationConstructionError::GeometricDegeneracy as before. Refer to
repair_result, repair_delaunay_local_single_pass,
DelaunayTriangulationConstructionError, and
TriangulationConstructionError::GeometricDegeneracy to locate and implement the
change.
---
Nitpick comments:
In `@src/core/operations.rs`:
- Around line 233-274: The field global_conflict_scan_nanos in struct
InsertionTelemetry is using u128 but should be u64 to match Duration semantics
and save space; change the type of
InsertionTelemetry::global_conflict_scan_nanos from u128 to u64, update any code
that assigns, accumulates, or converts this field (search for references to
InsertionTelemetry and global_conflict_scan_nanos, as well as any
add/extend/Duration::as_nanos() uses) to cast or use as u64 safely, and update
any tests or Default/serialization code that assume u128 accordingly.
In `@src/triangulation/delaunay.rs`:
- Around line 7956-7969: The test currently only verifies ConstructionOptions
getters; add an end-to-end unit test (or extend
test_construction_options_builder_roundtrip) that uses
ConstructionOptions::with_batch_repair_policy(DelaunayRepairPolicy::EveryN(...))
and drives a bulk insertion path so the cadenced repair branch is executed:
construct a builder/triangulation, call the bulk insertion routine that leads to
pending_repair_seeds and then call finalize_bulk_construction, and assert that
repairs occurred at the expected cadence (e.g., by checking pending_repair_seeds
is drained, internal repair count/state, or final triangulation invariants).
Ensure the test references ConstructionOptions, DelaunayRepairPolicy::EveryN,
pending_repair_seeds, and finalize_bulk_construction so the EveryN path is
actually exercised rather than only testing getters.
In `@tests/large_scale_debug.rs`:
- Around line 1180-1206: Add an assertion to test_skip_percentage_reports_ratio
that exercises the total == 0 short-circuit in skip_percentage; e.g., assert
that skip_percentage(0, 0) (or skip_percentage(any, 0)) equals 0.0 using the
same f64::EPSILON comparison pattern so the divide-by-zero branch is covered.
- Around line 346-377: The wildcard arm in initial_simplex_strategy_name should
log unexpected #[non_exhaustive] variants so future additions are visible;
change initial_simplex_strategy_name from const fn to a normal fn (so logging
macros can be used) and add a tracing::debug! (or tracing::warn!) call in the
`_` arm that includes the strategy (or its discriminant/string) before returning
"unknown"; keep initial_simplex_strategy_from_name and
initial_simplex_strategy_from_env behavior unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 73680a4b-9c5d-417e-8155-af53bb38dec9
📒 Files selected for processing (5)
- docs/dev/debug_env_vars.md
- src/core/operations.rs
- src/core/triangulation.rs
- src/triangulation/delaunay.rs
- tests/large_scale_debug.rs
- Return live insertion-created cells as repair seeds so batch construction can accumulate local frontiers without scanning vertex stars globally.
- Move repair seed frontier helpers into triangulation locality utilities and defer broad repair fallback to final construction repair.
- Extend construction telemetry and large-scale debug output with seed accumulation timing.
Codecov Report
❌ Patch coverage is …

Additional details and impacted files

@@ Coverage Diff @@
## main #369 +/- ##
==========================================
- Coverage 89.66% 89.39% -0.27%
==========================================
Files 58 61 +3
Lines 53693 56965 +3272
==========================================
+ Hits 48144 50925 +2781
- Misses 5549 6040 +491
Flags with carried forward coverage won't be shown. Click here to find out more. ☔ View full report in Codecov by Sentry.
- Preserve the terminal locate cell so exterior insertion can seed nearby repair work without scanning the full triangulation.
- Skip global conflict-region scans during exterior hull extension and defer broad Delaunay cleanup to cadenced and final repair.
- Move local repair seed bookkeeping helpers into triangulation locality utilities.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/core/triangulation.rs (1)
5477-5528: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Normalize `Outside + Some(empty)` to the same seed path as `Outside + None`.

Both branches fall through to hull extension, but only the `None` case gathers `terminal_cell`-based `repair_seed_cells`. If an upstream caller materializes "no conflict region" as `Some(&empty_buf)`, cadenced repair loses the local frontier entirely, so behavior now depends on how the caller represents the same state. Reuse the `None` path for empty buffers before building `InsertionSite::Exterior`.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/core/triangulation.rs` around lines 5477 - 5528, The Outside+Some(empty) path is currently treated differently from Outside+None and loses the terminal_cell-based repair_seed_cells; modify the (LocateResult::Outside, Some(cells)) branch to normalize empty buffers by computing repair_seed_cells the same way as the None branch: if cells.is_empty() then run the same conditional that checks !locate_stats.fell_back_to_scan() && self.tds.contains_cell(locate_trace.terminal_cell) and call collect_local_exterior_conflict_seed_cells(&self.tds, &self.kernel, &point, locate_trace.terminal_cell) to produce repair_seed_cells (or CellKeyBuffer::new() otherwise), then build InsertionSite::Exterior { conflict_cells: None, repair_seed_cells } so both empty Some and None follow the identical seed-path; reference LocateResult::Outside, locate_trace.terminal_cell, collect_local_exterior_conflict_seed_cells, InsertionSite::Exterior, repair_seed_cells, and CellKeyBuffer.
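One way to implement the suggested normalization is to collapse `Some(empty)` to `None` before matching, so both representations of "no conflict region" take the identical seed path. A minimal sketch, with `u64` standing in for the crate's `CellKey`:

```rust
/// Collapse `Some(&[])` to `None` so "no conflict region" has a single
/// canonical representation regardless of how the caller materialized it.
fn normalize_conflict_cells(cells: Option<&[u64]>) -> Option<&[u64]> {
    cells.filter(|c| !c.is_empty())
}

fn main() {
    let empty: [u64; 0] = [];
    let some = [7u64, 9];
    // Both "empty" encodings normalize to None and would share the seed path.
    assert_eq!(normalize_conflict_cells(Some(&empty)), None);
    assert_eq!(normalize_conflict_cells(None), None);
    // Non-empty buffers pass through untouched.
    assert_eq!(normalize_conflict_cells(Some(&some)), Some(&some[..]));
    println!("ok");
}
```

Normalizing once at the top of the match keeps the downstream `InsertionSite::Exterior` construction from needing to special-case the empty buffer at all.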
🧹 Nitpick comments (1)
src/triangulation/locality.rs (1)
51-68: ⚡ Quick win
Avoid quadratic dedupe in `append_live_unique_cell_seeds` for large batches.

Line 62 performs a linear `contains` check per candidate. In cadence/batch paths this can become a hotspot; build one set from existing seeds and use O(1) inserts.

⚡ Proposed refactor
```diff
 pub fn append_live_unique_cell_seeds<T, U, V, const D: usize>(
     tds: &Tds<T, U, V, D>,
     candidate_seed_cells: &[CellKey],
     seed_cells: &mut CellKeyBuffer,
 ) -> usize
 where
     U: DataType,
     V: DataType,
 {
     let mut added = 0usize;
+    let mut seen: FastHashSet<CellKey> = seed_cells.iter().copied().collect();
     for &cell_key in candidate_seed_cells {
-        if tds.contains_cell(cell_key) && !seed_cells.contains(&cell_key) {
+        if tds.contains_cell(cell_key) && seen.insert(cell_key) {
             seed_cells.push(cell_key);
             added = added.saturating_add(1);
         }
     }
     added
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/triangulation/locality.rs` around lines 51 - 68, append_live_unique_cell_seeds currently does a linear seed_cells.contains check per candidate, causing O(n*m) behavior; change it to build a HashSet (or similar O(1) lookup) from the existing seed_cells once, then iterate candidate_seed_cells and for each cell_key call tds.contains_cell(cell_key) and check membership in that set, inserting into both the set and seed_cells (CellKeyBuffer) when new; also consider reserving capacity in the set to avoid rehashes. Use the function and variable names append_live_unique_cell_seeds, seed_cells, candidate_seed_cells, CellKeyBuffer, CellKey, and tds.contains_cell to locate and update the code.
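The set-seeded dedupe pattern from the proposed refactor can be shown self-contained; this sketch uses `std::collections::HashSet` in place of `FastHashSet`, elides the `tds.contains_cell` liveness filter, and uses `u64` in place of `CellKey`:

```rust
use std::collections::HashSet;

/// Append candidates not already present, with O(1) membership checks:
/// seed a set from the existing entries once, then insert-check each candidate.
fn append_unique(seeds: &mut Vec<u64>, candidates: &[u64]) -> usize {
    let mut seen: HashSet<u64> = seeds.iter().copied().collect();
    let mut added = 0;
    for &c in candidates {
        // `insert` returns true only for values not yet in the set,
        // replacing the linear `seeds.contains(&c)` scan.
        if seen.insert(c) {
            seeds.push(c);
            added += 1;
        }
    }
    added
}

fn main() {
    let mut seeds = vec![1u64, 2, 3];
    let added = append_unique(&mut seeds, &[2, 3, 4, 4, 5]);
    assert_eq!(added, 2);
    assert_eq!(seeds, vec![1, 2, 3, 4, 5]);
    println!("ok");
}
```

The one-time `collect` costs O(existing seeds), after which each candidate is O(1) expected, versus O(existing + appended) per candidate with the linear scan.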
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/core/triangulation.rs`:
- Around line 4233-4239: The counter/telemetry is being recorded even when
validate_after_insertion(...) only gates and skips real checks; change the
call/site so telemetry (Self::record_topology_validation_telemetry and
incrementing topology_validation_calls) is only invoked when actual validation
work runs. Do this by having validate_after_insertion return a discriminant
(e.g., ValidationOutcome or (bool did_run, Result<(),E>)) or expose a helper
like validation_will_run(...), then: call Instant::now() and record
duration/metric only when did_run is true (i.e., real Level3/link checks
executed). Apply the same change to the other occurrence mentioned (the block
around 4289-4295) so skipped-policy invocations do not increment the validation
counter.
In `@src/triangulation/locality.rs`:
- Around line 18-23: The public wrapper struct LocalConflictSeedCells can be
silently discarded; add the #[must_use] attribute to the LocalConflictSeedCells
declaration so callers get a compiler warning if they ignore the result (leave
the public fields seed_cells: CellKeyBuffer and conflict_cells_found: usize
unchanged). Locate the struct named LocalConflictSeedCells and prepend
#[must_use] directly above its declaration.
---
Outside diff comments:
In `@src/core/triangulation.rs`:
- Around line 5477-5528: The Outside+Some(empty) path is currently treated
differently from Outside+None and loses the terminal_cell-based
repair_seed_cells; modify the (LocateResult::Outside, Some(cells)) branch to
normalize empty buffers by computing repair_seed_cells the same way as the None
branch: if cells.is_empty() then run the same conditional that checks
!locate_stats.fell_back_to_scan() &&
self.tds.contains_cell(locate_trace.terminal_cell) and call
collect_local_exterior_conflict_seed_cells(&self.tds, &self.kernel, &point,
locate_trace.terminal_cell) to produce repair_seed_cells (or
CellKeyBuffer::new() otherwise), then build InsertionSite::Exterior {
conflict_cells: None, repair_seed_cells } so both empty Some and None follow the
identical seed-path; reference LocateResult::Outside,
locate_trace.terminal_cell, collect_local_exterior_conflict_seed_cells,
InsertionSite::Exterior, repair_seed_cells, and CellKeyBuffer.
---
Nitpick comments:
In `@src/triangulation/locality.rs`:
- Around line 51-68: append_live_unique_cell_seeds currently does a linear
seed_cells.contains check per candidate, causing O(n*m) behavior; change it to
build a HashSet (or similar O(1) lookup) from the existing seed_cells once, then
iterate candidate_seed_cells and for each cell_key call
tds.contains_cell(cell_key) and check membership in that set, inserting into
both the set and seed_cells (CellKeyBuffer) when new; also consider reserving
capacity in the set to avoid rehashes. Use the function and variable names
append_live_unique_cell_seeds, seed_cells, candidate_seed_cells, CellKeyBuffer,
CellKey, and tds.contains_cell to locate and update the code.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 4f67cbdc-85d6-4edc-83ac-4fbec016c860
📒 Files selected for processing (3)
- src/core/algorithms/locate.rs
- src/core/triangulation.rs
- src/triangulation/locality.rs
- Keep exterior repair seeding local when conflict buffers are empty, avoiding global conflict-region scans during insertion.
- Gate topology-validation telemetry on actual validation work and add typed validation cadences for large-scale diagnostics.
- Preserve typed Delaunay repair failures with construction repair phases and keep mutation repair gates independent of insertion cadence.
- Add focused triangulation construction, repair, and validation preludes across docs, examples, benches, and tests.
- Add an adaptive seed-backlog trigger for batch local Delaunay repair.
- Track local repair frontier sizes and cadence versus backlog triggers.
- Expose the 3D large-scale repair interval in the debug just recipe.
- Clarify batch repair policy docs and cover trigger/telemetry behavior with focused tests.
🧹 Nitpick comments (1)
justfile (1)
236-237: ⚡ Quick win
Align the `repair_every` default with the harness/docs default to avoid drift.

Line 236 and Line 284 default to `repair_every=2`; if the large-scale harness default is `4`, this recipe silently changes baseline behavior and can skew perf comparisons. Prefer using the same default value (or defer to the harness default when not provided) and keep the help text aligned.

Proposed alignment patch

```diff
-debug-large-scale-3d n="10000" repair_every="2":
+debug-large-scale-3d n="10000" repair_every="4":
@@
-    @echo "  just debug-large-scale-3d [n] [repair_every]  # Issue #341: 3D scalability (defaults n=10000, repair_every=2)"
+    @echo "  just debug-large-scale-3d [n] [repair_every]  # Issue #341: 3D scalability (defaults n=10000, repair_every=4)"
```

Also applies to: 284-284
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@justfile` around lines 236 - 237, The recipe debug-large-scale-3d currently defaults repair_every=2 which diverges from the large-scale harness default of 4; update the default to repair_every=4 (and likewise update the other occurrence referenced) or make the parameter optional so the harness default is used, and ensure any help text/comment near debug-large-scale-3d reflects the new default to keep behavior and documentation aligned.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Nitpick comments:
In `@justfile`:
- Around line 236-237: The recipe debug-large-scale-3d currently defaults
repair_every=2 which diverges from the large-scale harness default of 4; update
the default to repair_every=4 (and likewise update the other occurrence
referenced) or make the parameter optional so the harness default is used, and
ensure any help text/comment near debug-large-scale-3d reflects the new default
to keep behavior and documentation aligned.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: f49a2a05-d31a-43ce-8847-41a30137a92b
📒 Files selected for processing (3)
- justfile
- src/triangulation/delaunay.rs
- tests/large_scale_debug.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/large_scale_debug.rs
- Use the flip-based Delaunay verifier when accepting shuffled construction retries instead of the brute-force property scan.
- Add construction phase telemetry for preprocessing, insertion, finalization, and final validation timing.
- Default the large-scale debug repair interval to 2 for the 3K repair sweep.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/large_scale_debug.rs (1)
1151-1172: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift
Incremental mode never populates the new telemetry sections.

`summary.telemetry` and `summary.slow_insertions` are only filled in the batch `ConstructionStatistics -> InsertionSummary` path. In this manual insertion loop we only update attempt/skip counters, so the new telemetry output stays empty for `ConstructionMode::Incremental`, which is the mode this harness uses for step-by-step debugging/profiling.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/large_scale_debug.rs` around lines 1151 - 1172, The incremental insertion loop doesn't populate summary.telemetry or summary.slow_insertions because only attempt/skip counters are updated; after each insert_with_statistics match arm (both InsertionOutcome::Inserted and ::Skipped) capture the returned stats and call the same telemetry/slow-insertion aggregation used by the batch path—e.g. invoke the summary telemetry/slow-insertion aggregation routines (the same logic used when converting ConstructionStatistics -> InsertionSummary) inside the Ok((..., stats)) arms so InsertionSummary (summary.telemetry and summary.slow_insertions) is updated for ConstructionMode::Incremental as well.
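The aggregation the comment asks for amounts to folding each insertion's stats into the running summary inside the incremental loop. A hedged sketch with illustrative type and field names (not the harness's actual `InsertionSummary`/`ConstructionStatistics` types):

```rust
/// Illustrative running summary, updated once per insertion attempt.
#[derive(Default)]
struct Summary {
    attempts: usize,
    slow_insertions: usize,
    total_walk_steps: u64,
}

/// Illustrative per-insertion stats, as returned by something like
/// `insert_with_statistics` (assumed shape).
struct InsertStats {
    walk_steps: u64,
    elapsed_ms: u64,
}

impl Summary {
    /// Fold one insertion's stats in; called from every match arm that
    /// receives stats, so incremental mode stays populated like batch mode.
    fn record(&mut self, stats: &InsertStats, slow_threshold_ms: u64) {
        self.attempts += 1;
        self.total_walk_steps += stats.walk_steps;
        if stats.elapsed_ms >= slow_threshold_ms {
            self.slow_insertions += 1;
        }
    }
}

fn main() {
    let mut summary = Summary::default();
    for stats in [
        InsertStats { walk_steps: 3, elapsed_ms: 1 },
        InsertStats { walk_steps: 40, elapsed_ms: 120 },
    ] {
        summary.record(&stats, 100);
    }
    assert_eq!(summary.attempts, 2);
    assert_eq!(summary.slow_insertions, 1);
    assert_eq!(summary.total_walk_steps, 43);
    println!("ok");
}
```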
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@tests/large_scale_debug.rs`:
- Around line 1197-1199: The validation cadence is currently advanced using the
loop index (idx + 1), which moves the schedule for duplicates/degenerate
vertices; change the call to validation_cadence.should_validate to use the count
of successful insertions (e.g., successful_inserts + 1) instead of idx + 1 so
validation is driven by actual mutations, and ensure successful_inserts is only
incremented when a vertex was truly inserted before calling
dt.as_triangulation().is_valid(); update any surrounding logic that assumes
idx-based advancement accordingly.
---
Outside diff comments:
In `@tests/large_scale_debug.rs`:
- Around line 1151-1172: The incremental insertion loop doesn't populate
summary.telemetry or summary.slow_insertions because only attempt/skip counters
are updated; after each insert_with_statistics match arm (both
InsertionOutcome::Inserted and ::Skipped) capture the returned stats and call
the same telemetry/slow-insertion aggregation used by the batch path—e.g. invoke
the summary telemetry/slow-insertion aggregation routines (the same logic used
when converting ConstructionStatistics -> InsertionSummary) inside the Ok((...,
stats)) arms so InsertionSummary (summary.telemetry and summary.slow_insertions)
is updated for ConstructionMode::Incremental as well.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 098761e4-6c85-4dd0-b091-cdbe5a260011
📒 Files selected for processing (4)
- docs/dev/debug_env_vars.md
- justfile
- src/triangulation/delaunay.rs
- tests/large_scale_debug.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- docs/dev/debug_env_vars.md
- Default batch construction repair to EveryN(2) while keeping direct incremental repair at EveryInsertion.
- Preserve final repair and validation as the acceptance gate for valid Delaunay output.
- Surface construction skip and slow-insertion diagnostics through the construction prelude.
- Align repair-policy docs with the batch default and the current 500/3000-point proxy results.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/triangulation/locality.rs`:
- Line 19: This module is declared as pub(crate) so using pub(crate) on items
inside it is redundant; change the visibility of all module-scoped exported
items (e.g., the struct LocalConflictSeedCells and the other pub(crate)
types/functions found in this file) from pub(crate) to pub(super) so they are
exposed only to the parent module as intended; locate each occurrence of
"pub(crate)" in this file and replace it with "pub(super)".
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 5ab6e1b4-e314-48bf-ac4f-54e129b8cc1c
📒 Files selected for processing (9)
- docs/numerical_robustness_guide.md
- docs/workflows.md
- examples/pachner_roundtrip_4d.rs
- src/core/triangulation.rs
- src/lib.rs
- src/triangulation/delaunay.rs
- src/triangulation/locality.rs
- tests/large_scale_debug.rs
- tests/prelude_exports.rs
✅ Files skipped from review due to trivial changes (2)
- examples/pachner_roundtrip_4d.rs
- docs/numerical_robustness_guide.md
🚧 Files skipped from review as they are similar to previous changes (4)
- tests/prelude_exports.rs
- tests/large_scale_debug.rs
- src/lib.rs
- src/core/triangulation.rs
- Compare quality metrics only for cells shared across independently transformed triangulations, rejecting cases with no comparable cells.
- Narrow locality repair helper visibility where the current module graph allows it while preserving core insertion repair access.
- Clarify that non-substantive PRs may be declined unless justified by cleanup or tooling needs.
@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/core/triangulation.rs (2)
4129-4168: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Separate full validation from link-only validation in telemetry.

`validation_after_insertion_work()` now distinguishes `FullValidation` from `RequiredTopologyLinks`, but both paths still feed the same `topology_validation_calls`/`nanos` counters. That makes the new harness numbers hard to interpret: a full Level-3 pass and a cheaper ridge/vertex-link-only pass look identical in telemetry.

Please either record separate metrics per `InsertionValidationWork` or keep the existing `topology_validation_*` fields exclusive to `FullValidation`.

Also applies to: 4257-4266, 4319-4328
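The suggested split could look like the following sketch. The variant names come from the review; the telemetry struct and its field names are assumptions for illustration, not the crate's actual API.

```rust
// Record full-validation and link-only passes into separate counters so
// harness output can distinguish a Level-3 pass from a cheap link check.
#[derive(Clone, Copy)]
enum InsertionValidationWork {
    FullValidation,
    RequiredTopologyLinks,
}

#[derive(Default)]
struct ValidationTelemetry {
    topology_validation_calls: u64,
    topology_validation_nanos: u64,
    topology_link_validation_calls: u64,
    topology_link_validation_nanos: u64,
}

impl ValidationTelemetry {
    fn record(&mut self, work: InsertionValidationWork, nanos: u64) {
        match work {
            InsertionValidationWork::FullValidation => {
                self.topology_validation_calls += 1;
                self.topology_validation_nanos += nanos;
            }
            InsertionValidationWork::RequiredTopologyLinks => {
                self.topology_link_validation_calls += 1;
                self.topology_link_validation_nanos += nanos;
            }
        }
    }
}

fn main() {
    let mut t = ValidationTelemetry::default();
    t.record(InsertionValidationWork::FullValidation, 500);
    t.record(InsertionValidationWork::RequiredTopologyLinks, 40);
    t.record(InsertionValidationWork::RequiredTopologyLinks, 60);
    assert_eq!(t.topology_validation_calls, 1);
    assert_eq!(t.topology_link_validation_calls, 2);
    assert_eq!(t.topology_link_validation_nanos, 100);
}
```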
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/core/triangulation.rs` around lines 4129 - 4168, The telemetry currently conflates full and link-only validations; update validate_after_insertion so the metrics reflect the InsertionValidationWork variant returned by validation_after_insertion_work(suspicion): call validation_after_insertion_work, log the trigger as now, then in the match arm for InsertionValidationWork::FullValidation wrap the is_valid() call with the existing topology_validation_* counters (or increment the existing counters) and in the InsertionValidationWork::RequiredTopologyLinks arm record a separate pair of metrics (e.g., topology_link_validation_calls and topology_link_validation_nanos) or skip touching the existing topology_validation_* counters if you prefer exclusivity; adjust any uses in log_validation_trigger_if_enabled/is_valid/validate_required_topology_links if they currently assume a single metric so telemetry distinguishes full vs link-only runs.
5519-5535: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Don't count caller-provided conflict buffers as conflict-region discovery.

These branches increment `conflict_region_calls`/`conflict_region_cells_*` even though no conflict search ran; the cells were supplied by the caller. That will overstate the new conflict-region telemetry whenever higher layers precompute conflicts, and makes those metrics incomparable with the branches that actually call `find_conflict_region()` or `collect_local_exterior_conflict_seed_cells()`.

A separate `provided_conflict_cells_*` metric, or limiting `conflict_region_*` to true discovery work, would keep the harness output honest.

Also applies to: 5579-5590
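A minimal sketch of the separate-metric option (the struct and method names are illustrative, not the crate's real telemetry types): discovered conflict regions and caller-provided buffers accumulate into different counters.

```rust
// Count caller-provided conflict buffers separately from conflict regions the
// triangulation actually discovered, so the two stay comparable.
#[derive(Default, Debug, PartialEq)]
struct ConflictTelemetry {
    conflict_region_calls: u64,
    conflict_region_cells: u64,
    provided_conflict_calls: u64,
    provided_conflict_cells: u64,
}

impl ConflictTelemetry {
    // A conflict search (e.g. find_conflict_region) actually ran.
    fn record_discovered(&mut self, cells: usize) {
        self.conflict_region_calls += 1;
        self.conflict_region_cells += cells as u64;
    }

    // The caller supplied the buffer; no search happened.
    fn record_provided(&mut self, cells: usize) {
        self.provided_conflict_calls += 1;
        self.provided_conflict_cells += cells as u64;
    }
}

fn main() {
    let mut t = ConflictTelemetry::default();
    t.record_discovered(7);
    t.record_provided(3);
    assert_eq!(t.conflict_region_calls, 1);
    assert_eq!(t.conflict_region_cells, 7);
    assert_eq!(t.provided_conflict_calls, 1);
    assert_eq!(t.provided_conflict_cells, 3);
}
```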
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/core/triangulation.rs` around lines 5519 - 5535, The telemetry call record_conflict_region_telemetry should not be invoked for caller-provided conflict buffers (the match arm handling LocateResult::InsideCell(...), Some(cells) where cells came from the caller); remove or skip that call in these arms and instead either increment a new provided_conflict_cells/provided_conflict_calls metric or only call record_conflict_region_telemetry from the code paths that actually perform discovery (the branches that call find_conflict_region() or collect_local_exterior_conflict_seed_cells()); update both occurrences (the InsideCell Some(cells) arm and the analogous branch around lines 5579–5590) to use ensure_non_empty_conflict_cells for fallback behavior but not count them as discovered conflict-region telemetry.
src/core/algorithms/locate.rs (1)
1-1: 🛠️ Refactor suggestion | 🟠 Major | ⚡ Quick win
Add file-level unsafe prohibition to match repository Rust safety policy.
This changed Rust file should explicitly forbid unsafe code.
Proposed patch
```diff
 //! Point location algorithms for triangulations.
+#![forbid(unsafe_code)]
 //!
 //! Implements facet-walking point location for finding the cell containing
```

As per coding guidelines: "`**/*.rs`: Use `#![forbid(unsafe_code)]` - unsafe code is strictly forbidden in Rust code".
**/*.rs: Use#![forbid(unsafe_code)]- unsafe code is strictly forbidden in Rust code".🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/core/algorithms/locate.rs` at line 1, Add the crate-level forbid by inserting the attribute to prohibit unsafe code at the top of this Rust file: add #![forbid(unsafe_code)] as the very first item in src/core/algorithms/locate.rs so the module-level policy is enforced; ensure it appears before any module docs or use/statements so the compiler applies the ban to all functions/types in this file (e.g., the point-location functions and any helper structs/enums defined here).
♻️ Duplicate comments (1)
tests/large_scale_debug.rs (1)
1198-1200: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
Validation cadence still advances on attempted vertices, not successful insertions.

The cadence call still passes `idx + 1` here, so duplicates/degeneracies advance the validation schedule even though no mutation occurred. This drifts validation relative to repair (which is keyed off successful inserts) and can re-run `is_valid()` against an unchanged state. Drive cadence from `summary.inserted` and gate validation on a vertex actually being inserted this iteration.

🔧 Proposed fix

```diff
+    let mut inserted_this_round = false;
     match dt.insert_with_statistics(vertex) {
         Ok((InsertionOutcome::Inserted { .. }, stats)) => {
             summary.record_inserted(stats);
+            inserted_this_round = true;
         }
         Ok((InsertionOutcome::Skipped { error }, stats)) => {
 @@
-    if had_cells
-        && validation_cadence.should_validate(idx + 1)
+    if had_cells
+        && inserted_this_round
+        && validation_cadence.should_validate(summary.inserted)
         && let Err(e) = dt.as_triangulation().is_valid()
     {
```
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/large_scale_debug.rs` around lines 1198 - 1200, The validation cadence is advanced using idx + 1 even when no vertex was actually inserted, causing validation to drift; change the gating so that you only call validation_cadence.should_validate(...) when this iteration inserted a vertex (use summary.inserted as the counter rather than idx + 1) and only invoke dt.as_triangulation().is_valid() when summary.inserted indicates a successful insertion this loop; update the condition around had_cells/validation_cadence.should_validate(...) to check summary.inserted (and that the current vertex was inserted) before running is_valid().
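The gating fix above reduces to a small counting argument, sketched here with a simplified every-N cadence (the real `ValidationCadence` in the crate may carry more state): the schedule is keyed off the running count of successful insertions, so skipped vertices never advance it.

```rust
// Advance the validation schedule on *successful* insertions only, so
// duplicates and degeneracies cannot trigger a redundant is_valid() pass.
struct ValidationCadence {
    every_n: usize,
}

impl ValidationCadence {
    fn should_validate(&self, inserted: usize) -> bool {
        inserted > 0 && inserted % self.every_n == 0
    }
}

// Each bool models one attempted vertex: true = inserted, false = skipped.
fn simulate(outcomes: &[bool], cadence: &ValidationCadence) -> usize {
    let mut inserted = 0;
    let mut validations = 0;
    for &ok in outcomes {
        if ok {
            inserted += 1;
            if cadence.should_validate(inserted) {
                validations += 1;
            }
        }
        // Skipped vertices fall through without advancing the cadence.
    }
    validations
}

fn main() {
    let cadence = ValidationCadence { every_n: 2 };
    // 4 successful inserts interleaved with 3 skips: validate after the
    // 2nd and 4th successful insertion, regardless of skips in between.
    let outcomes = [true, false, true, true, false, false, true];
    assert_eq!(simulate(&outcomes, &cadence), 2);
}
```

Keying the cadence off `idx + 1` instead would have produced a different (and drifting) validation count for the same sequence of successful insertions.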
🧹 Nitpick comments (1)
tests/proptest_triangulation.rs (1)
104-105: ⚡ Quick win
Tighten the proptest acceptance condition to avoid near-vacuous passes.

At line 147, `prop_assume!(matched_cells > 0)` allows cases where only one cell is compared, which can under-test the transformation invariants.

As per coding guidelines: "Proptests under `tests/proptest_*.rs` must cover algebraic and topological invariants … not just 'does it not panic'."

Proposed adjustment

```diff
-    prop_assume!(matched_cells > 0);
+    let total_cells = tds_orig.number_of_cells().max(1);
+    // Keep degeneracy-tolerant behavior, but require meaningful overlap.
+    prop_assume!(matched_cells.saturating_mul(2) >= total_cells);
```

Also applies to: 140-141, 147-147
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/proptest_triangulation.rs` around lines 104 - 105, The current proptest acceptance uses prop_assume!(matched_cells > 0) which allows near-vacuous runs; change the assumption to require multiple matched comparisons (e.g., prop_assume!(matched_cells > 1) or prop_assume!(matched_cells >= 2)) and, if available, also assert the total number of cells considered is >1 so the test exercises non-trivial triangulation behavior; update the prop_assume! call that references matched_cells accordingly to ensure the test only accepts cases with at least two matched cells (and adjust any related counters or variable names that drive the comparison loop).
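The proposed threshold is just integer arithmetic, which the following sketch isolates (function name and the one-half ratio are taken from the suggested diff; they are a proposal, not the test's current behavior): the acceptance predicate requires the matched cells to cover at least half of the original triangulation.

```rust
// Acceptance predicate from the proposed adjustment: reject near-vacuous
// comparisons where only a cell or two overlap between triangulations.
fn meaningful_overlap(matched_cells: usize, total_cells: usize) -> bool {
    // .max(1) keeps the empty-triangulation edge case from passing trivially.
    let total = total_cells.max(1);
    matched_cells.saturating_mul(2) >= total
}

fn main() {
    assert!(!meaningful_overlap(1, 10)); // one matched cell of ten: rejected
    assert!(meaningful_overlap(5, 10));  // half the cells compared: accepted
    assert!(meaningful_overlap(1, 1));   // tiny but full overlap: accepted
    assert!(!meaningful_overlap(0, 0));  // empty input cannot pass vacuously
}
```

`saturating_mul` keeps the check well-defined even for pathologically large cell counts, which matters under proptest's randomized inputs.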
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@benches/ci_performance_suite.rs`:
- Around line 39-41: Add the crate-level attribute to forbid unsafe code by
inserting #![forbid(unsafe_code)] at the very top of the Rust source (before any
items or uses) in this benchmark file; ensure the attribute is the first
non-comment token so it applies to the crate, and run cargo check to confirm no
build issues from the added attribute.
In `@benches/topology_guarantee_construction.rs`:
- Around line 17-19: Add the crate-level attribute to forbid unsafe code by
inserting #![forbid(unsafe_code)] at the very top of the bench file (above the
use/imports) so the file complies with the policy; this affects the module that
imports DelaunayTriangulation, TopologyGuarantee, DelaunayRepairPolicy and
ValidationPolicy and must appear before those use statements.
In `@examples/delaunayize_repair.rs`:
- Around line 21-24: Add the crate-level prohibition by inserting
#![forbid(unsafe_code)] as the very first line of the Rust example file that
contains the Delaunay imports (the file that uses
DelaunayTriangulationConstructionError, DelaunayTriangulationValidationError and
the delaunay::prelude::* modules); this ensures the example conforms to the
Rust-file policy forbidding unsafe code.
In `@examples/numerical_robustness.rs`:
- Around line 12-15: This example file is missing the crate-level attribute that
forbids unsafe Rust; add the declaration `#![forbid(unsafe_code)]` at the very
top of the file (before the `use` statements and any other items) so the
compiler enforces no unsafe code in this example (affects this example
containing DelaunayTriangulation, DelaunayTriangulationConstructionError and
DelaunayTriangulationValidationError).
In `@tests/check_perturbation_stats.rs`:
- Around line 7-11: Add the crate-level attribute forbidding unsafe code to the
top of the file by inserting `#![forbid(unsafe_code)]` as the very first line in
tests/check_perturbation_stats.rs (before the existing `use` statements that
import DelaunayRepairPolicy, DelaunayTriangulation, TopologyGuarantee,
InsertionOutcome, and ValidationPolicy) so the file complies with the
project-wide Rust safety policy.
In `@tests/delaunay_incremental_insertion.rs`:
- Around line 15-17: Add a crate-level ban on unsafe code by inserting
#![forbid(unsafe_code)] at the very top of this test file (before the existing
use declarations that import
delaunay::prelude::triangulation::construction::{ConstructionOptions,
DedupPolicy, DelaunayTriangulation, TopologyGuarantee, vertex, ...}). This will
satisfy the repository policy that every Rust source must explicitly forbid
unsafe code.
In `@tests/euler_characteristic.rs`:
- Around line 17-20: Add the crate-level forbid for unsafe code at the very top
of this test file (before any use/imports) to enforce the project policy; modify
tests/euler_characteristic.rs so that the crate attribute forbidding unsafe code
is present above the existing use
delaunay::prelude::triangulation::construction::{...} import line and ensure it
applies to the whole crate.
---
Outside diff comments:
In `@src/core/algorithms/locate.rs`:
- Line 1: Add the crate-level forbid by inserting the attribute to prohibit
unsafe code at the top of this Rust file: add #![forbid(unsafe_code)] as the
very first item in src/core/algorithms/locate.rs so the module-level policy is
enforced; ensure it appears before any module docs or use/statements so the
compiler applies the ban to all functions/types in this file (e.g., the
point-location functions and any helper structs/enums defined here).
In `@src/core/triangulation.rs`:
- Around line 4129-4168: The telemetry currently conflates full and link-only
validations; update validate_after_insertion so the metrics reflect the
InsertionValidationWork variant returned by
validation_after_insertion_work(suspicion): call
validation_after_insertion_work, log the trigger as now, then in the match arm
for InsertionValidationWork::FullValidation wrap the is_valid() call with the
existing topology_validation_* counters (or increment the existing counters) and
in the InsertionValidationWork::RequiredTopologyLinks arm record a separate pair
of metrics (e.g., topology_link_validation_calls and
topology_link_validation_nanos) or skip touching the existing
topology_validation_* counters if you prefer exclusivity; adjust any uses in
log_validation_trigger_if_enabled/is_valid/validate_required_topology_links if
they currently assume a single metric so telemetry distinguishes full vs
link-only runs.
- Around line 5519-5535: The telemetry call record_conflict_region_telemetry
should not be invoked for caller-provided conflict buffers (the match arm
handling LocateResult::InsideCell(...), Some(cells) where cells came from the
caller); remove or skip that call in these arms and instead either increment a
new provided_conflict_cells/provided_conflict_calls metric or only call
record_conflict_region_telemetry from the code paths that actually perform
discovery (the branches that call find_conflict_region() or
collect_local_exterior_conflict_seed_cells()); update both occurrences (the
InsideCell Some(cells) arm and the analogous branch around lines 5579–5590) to
use ensure_non_empty_conflict_cells for fallback behavior but not count them as
discovered conflict-region telemetry.
---
Duplicate comments:
In `@tests/large_scale_debug.rs`:
- Around line 1198-1200: The validation cadence is advanced using idx + 1 even
when no vertex was actually inserted, causing validation to drift; change the
gating so that you only call validation_cadence.should_validate(...) when this
iteration inserted a vertex (use summary.inserted as the counter rather than idx
+ 1) and only invoke dt.as_triangulation().is_valid() when summary.inserted
indicates a successful insertion this loop; update the condition around
had_cells/validation_cadence.should_validate(...) to check summary.inserted (and
that the current vertex was inserted) before running is_valid().
---
Nitpick comments:
In `@tests/proptest_triangulation.rs`:
- Around line 104-105: The current proptest acceptance uses
prop_assume!(matched_cells > 0) which allows near-vacuous runs; change the
assumption to require multiple matched comparisons (e.g.,
prop_assume!(matched_cells > 1) or prop_assume!(matched_cells >= 2)) and, if
available, also assert the total number of cells considered is >1 so the test
exercises non-trivial triangulation behavior; update the prop_assume! call that
references matched_cells accordingly to ensure the test only accepts cases with
at least two matched cells (and adjust any related counters or variable names
that drive the comparison loop).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 368d087e-e37e-4097-9c47-b0ccc952767d
📒 Files selected for processing (51)
- CONTRIBUTING.md
- README.md
- benches/ci_performance_suite.rs
- benches/large_scale_performance.rs
- benches/profiling_suite.rs
- benches/topology_guarantee_construction.rs
- docs/api_design.md
- docs/code_organization.md
- docs/dev/debug_env_vars.md
- docs/dev/rust.md
- docs/diagnostics.md
- docs/numerical_robustness_guide.md
- docs/topology.md
- docs/validation.md
- docs/workflows.md
- examples/delaunayize_repair.rs
- examples/diagnostics.rs
- examples/numerical_robustness.rs
- examples/pachner_roundtrip_4d.rs
- examples/topology_editing_2d_3d.rs
- examples/zero_allocation_iterator_demo.rs
- justfile
- src/core/algorithms/locate.rs
- src/core/operations.rs
- src/core/triangulation.rs
- src/lib.rs
- src/triangulation.rs
- src/triangulation/builder.rs
- src/triangulation/delaunay.rs
- src/triangulation/locality.rs
- src/triangulation/validation.rs
- tests/allocation_api.rs
- tests/check_perturbation_stats.rs
- tests/circumsphere_debug_tools.rs
- tests/conflict_region_verification.rs
- tests/dedup_batch_construction.rs
- tests/delaunay_edge_cases.rs
- tests/delaunay_incremental_insertion.rs
- tests/delaunay_public_api_coverage.rs
- tests/delaunay_repair_fallback.rs
- tests/euler_characteristic.rs
- tests/insert_with_statistics.rs
- tests/large_scale_debug.rs
- tests/prelude_exports.rs
- tests/proptest_delaunay_triangulation.rs
- tests/proptest_euler_characteristic.rs
- tests/proptest_flips.rs
- tests/proptest_orientation.rs
- tests/proptest_triangulation.rs
- tests/regressions.rs
- tests/serialization_vertex_preservation.rs
- Count only full insertion validation runs in topology validation telemetry while preserving required link checks.
- Exclude caller-provided conflict buffers from discovered conflict-region telemetry.
- Keep large-scale debug validation cadence tied to inserted vertices and tighten triangulation proptest acceptance.
- Add explicit unsafe-code forbids to remaining bench, example, and test crates.
- Default batch construction to the real-vertex max-volume initial simplex and every-insertion Delaunay repair cadence.
- Add focused construction diagnostics telemetry for repair phase timing, queued work, seed frontiers, and construction phase costs.
- Keep local repair postcondition replay tied to observed repair work so skipped flip candidates cannot suppress required validation.
- Update large-scale debug docs and recipes to reflect the new defaults.

BREAKING CHANGE: ConstructionTelemetry moved to the triangulation diagnostics module and focused diagnostics prelude. Batch construction now defaults to InitialSimplexStrategy::MaxVolume and DelaunayRepairPolicy::EveryInsertion, so callers relying on the previous construction telemetry import path or EveryN(2) batch cadence must opt into the old behavior explicitly.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/large_scale_debug.rs (1)
190-230:⚠️ Potential issue | 🟠 Major | 🏗️ Heavy liftIncremental mode does not collect or populate telemetry and slow-insertion samples.
InsertionSummaryfieldstelemetryandslow_insertionsare populated only in batch mode viaFrom<ConstructionStatistics>(which aggregatesInsertionTelemetryacross insertions). Incremental mode callsrecord_insertedandrecord_skipped, but these methods only handle attempt counts and skip statistics—they do not touchtelemetryorslow_insertions.
InsertionStatistics(the only data available to incremental insertions) contains only attempt counts and flags, not the detailed per-insertion metrics (locate_calls,conflict_region,cavity_insertion, etc.) needed to populate aConstructionTelemetry. To fix this, either:
- Gate the telemetry and slow-insertion output sections to batch mode only, or
- Thread
InsertionTelemetrythrough incremental insertions and accumulate it inrecord_inserted.🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/large_scale_debug.rs` around lines 190 - 230, InsertionSummary's telemetry and slow_insertions are never populated in incremental mode because record_inserted/record_skipped only consume InsertionStatistics; either gate telemetry/slow-insertion reporting to batch mode only (where From<ConstructionStatistics> provides ConstructionTelemetry) or thread per-insertion telemetry through incremental paths and accumulate it in record_inserted: modify record_inserted(&mut self, stats: InsertionStatistics) to accept an additional InsertionTelemetry (or replace InsertionStatistics with a struct that contains both) and update the InsertionSummary.telemetry accumulator and slow_insertions vector when an insertion qualifies; update callers that perform incremental insertions to pass the telemetry (or add a conversion that builds minimal telemetry from existing stats) and ensure record_skipped/record continue to call record(...) for attempt/skips as before.
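The second option (threading telemetry through incremental insertions) could look like the following sketch. The types are deliberately simplified stand-ins: the real `InsertionStatistics` and `InsertionTelemetry` carry more fields, and the slow-insertion threshold here is an arbitrary illustrative constant.

```rust
// Accept per-insertion telemetry alongside the statistics so incremental
// mode accumulates the same summary data batch mode gets for free.
#[derive(Default, Clone, Copy)]
struct InsertionTelemetry {
    locate_calls: u64,
    flips: u64,
}

#[derive(Default)]
struct InsertionSummary {
    inserted: usize,
    telemetry: InsertionTelemetry,
    slow_insertions: Vec<(usize, u64)>, // (vertex index, nanos); bounded in practice
}

impl InsertionSummary {
    fn record_inserted(&mut self, idx: usize, nanos: u64, t: InsertionTelemetry) {
        self.inserted += 1;
        self.telemetry.locate_calls += t.locate_calls;
        self.telemetry.flips += t.flips;
        // Illustrative 1 ms threshold for a "slow" insertion sample.
        const SLOW_NANOS: u64 = 1_000_000;
        if nanos >= SLOW_NANOS {
            self.slow_insertions.push((idx, nanos));
        }
    }
}

fn main() {
    let mut s = InsertionSummary::default();
    s.record_inserted(0, 10_000, InsertionTelemetry { locate_calls: 3, flips: 1 });
    s.record_inserted(1, 2_000_000, InsertionTelemetry { locate_calls: 9, flips: 4 });
    assert_eq!(s.inserted, 2);
    assert_eq!(s.telemetry.locate_calls, 12);
    assert_eq!(s.slow_insertions, vec![(1, 2_000_000)]);
}
```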
🧹 Nitpick comments (3)
src/core/triangulation.rs (1)
9550-9663: 🏗️ Heavy lift
Broaden these new exterior-insertion regressions across dimensions.

These assertions exercise a const-generic code path, but the new seed/telemetry coverage is effectively 3D-only. A small `pastey`-based smoke matrix for at least D=2 through D=5 would make the new frontier/seed bookkeeping much safer against dimension-specific regressions.

As per coding guidelines: "Unit tests must cover known values, error paths, and dimension-generic correctness. Dimension-generic tests MUST cover D=2 through D=5 whenever feasible; use `pastey` macros for per-dimension tests".

tests/large_scale_debug.rs (1)
884-981: ⚡ Quick win
Gate the new telemetry dump behind the test diagnostics feature.

These new summary sections are committed test diagnostics, but they are always emitted whenever stats exist. Please put the detailed telemetry / `slow_insertions` dump behind `cfg(feature = "diagnostics")` or an equivalent test-only gate so the harness follows the repo's diagnostics policy.

As per coding guidelines: "Gate non-essential test/benchmark diagnostics behind feature flags; use `diagnostics` for test diagnostics".

Also applies to: 1008-1035
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/large_scale_debug.rs` around lines 884 - 981, The new always-on telemetry dump in print_construction_telemetry (and the similar sections around the slow_insertions dump) must be gated behind the test diagnostics feature: wrap the detailed output blocks (the body that calls print_construction_phase_telemetry, print_timing_summary, print_local_repair_phase_telemetry, print_local_repair_frontier_telemetry, print_local_repair_work_telemetry, print_repair_seed_accumulation_telemetry and the global_conflict_scans printing) in a cfg(feature = "diagnostics") conditional (or an equivalent test-only gate) so that these summaries only emit when the "diagnostics" feature is enabled; leave the early has_data() check and minimal behavior unchanged.src/core/algorithms/flips.rs (1)
5123-5143: The weaker postcondition helper is only used from finalize_bulk_construction and is gated by full validation.

`finalize_bulk_construction` calls `run_seeded_completion_repair` (which uses `verify_local_repair_postcondition` with deferred connectivity), then performs `normalize_and_promote_positive_orientation()`, and finally calls full `validate()` when the default topology guarantee (`TopologyGuarantee::PLManifold`) requires it. Since `PLManifold.requires_vertex_links_at_completion()` is `true`, the deferred connectivity check cannot escape; final validation gates the triangulation before return.

The concern about escaping non-Euclidean paths is addressed by construction path isolation: only `finalize_bulk_construction` uses `repair_delaunay_local_single_pass`; periodic/toroidal builds use different paths. However, the silent `continue` in `collect_cells_around_ridge` when a neighbor cell is missing (lines 7381-7382) silently narrows the ridge star without an explicit error or logging, which could mask broken adjacency. While subsequent checks validate that actual neighbor cells contain the ridge vertices, the missing-cell case should either error explicitly or be logged at debug level per the coding guidelines on tracing diagnostics.
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/core/algorithms/flips.rs` around lines 5123 - 5143, The silent continue in collect_cells_around_ridge (the loop that skips when a neighbor cell is missing) can mask broken adjacency; update that site to either return an explicit error (propagated up to callers like repair_delaunay_local_single_pass/finalize_bulk_construction) or emit a debug-level trace with context (ridge id, missing neighbor key) before continuing so diagnostics are available; ensure this change composes with verify_local_repair_postcondition/verify_repair_postcondition_with_topology which uses deferred connectivity during finalize_bulk_construction and does not assume missing neighbors are benign.
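The error-instead-of-continue option could be sketched as below. The error variant, key types, and adjacency representation are assumptions for illustration; the crate's real `FlipError` and TDS cell storage differ.

```rust
// Surface a missing ridge neighbor as a typed error with context instead of
// silently shrinking the ridge star via `continue`.
use std::collections::HashMap;

type CellKey = u32;

#[derive(Debug, PartialEq)]
enum FlipError {
    DanglingRidgeNeighbor { cell_key: CellKey, neighbor_key: CellKey },
}

fn collect_cells_around_ridge(
    cells: &HashMap<CellKey, Vec<CellKey>>, // cell -> neighbor keys
    start: CellKey,
) -> Result<Vec<CellKey>, FlipError> {
    let mut star = vec![start];
    for &neighbor_key in cells.get(&start).into_iter().flatten() {
        if !cells.contains_key(&neighbor_key) {
            // Previously a silent skip; now a hard failure with context so
            // callers see the broken adjacency instead of an undersized star.
            return Err(FlipError::DanglingRidgeNeighbor {
                cell_key: start,
                neighbor_key,
            });
        }
        star.push(neighbor_key);
    }
    Ok(star)
}

fn main() {
    let mut cells = HashMap::new();
    cells.insert(0, vec![1, 99]); // 99 does not exist: dangling reference
    cells.insert(1, vec![0]);
    assert_eq!(
        collect_cells_around_ridge(&cells, 0),
        Err(FlipError::DanglingRidgeNeighbor { cell_key: 0, neighbor_key: 99 })
    );
}
```

Carrying both `cell_key` and `neighbor_key` in the variant is what makes the diagnostic actionable when the error later surfaces from a bulk-construction repair pass.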
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/core/algorithms/flips.rs`:
- Around line 7377-7387: The loop currently hides a missing neighbor by using
`continue` when `tds.cell(neighbor_key)` returns None; instead treat a missing
neighbor as a corruption and fail immediately so caller (e.g.
`build_k3_flip_context*`) sees the real issue. Replace the `continue` in the `if
let Some(neighbor_cell) = tds.cell(neighbor_key) else { continue; };` branch
with an explicit error return (preferably a new descriptive variant like
`FlipError::DanglingRidgeNeighbor { cell_key, neighbor_key }`, or at minimum
return `FlipError::InvalidRidgeAdjacency { cell_key }`) so missing
`neighbor_key` is surfaced rather than turning into an undersized ridge star.
Ensure the error includes `cell_key` and the `neighbor_key` to aid diagnostics.
In `@src/core/triangulation.rs`:
- Around line 5667-5673: The code forces delaunay_repair_required true whenever
caller_provided_conflict_cells is set, which wrongly forces repair for normal
Delaunay/Bowyer‑Watson insertions; change the TryInsertImplOk construction to
use only outcome.delaunay_repair_required (remove the "||
caller_provided_conflict_cells") so insert_with_conflict_region()'s computed
flag is respected; if there is some deliberate external-supplied case that must
force repair, handle that explicitly at the callsite (e.g., in
insert_with_conflict_region or the Delaunay wrapper) rather than OR-ing here.
---
Outside diff comments:
In `@tests/large_scale_debug.rs`:
- Around line 190-230: InsertionSummary's telemetry and slow_insertions are
never populated in incremental mode because record_inserted/record_skipped only
consume InsertionStatistics; either gate telemetry/slow-insertion reporting to
batch mode only (where From<ConstructionStatistics> provides
ConstructionTelemetry) or thread per-insertion telemetry through incremental
paths and accumulate it in record_inserted: modify record_inserted(&mut self,
stats: InsertionStatistics) to accept an additional InsertionTelemetry (or
replace InsertionStatistics with a struct that contains both) and update the
InsertionSummary.telemetry accumulator and slow_insertions vector when an
insertion qualifies; update callers that perform incremental insertions to pass
the telemetry (or add a conversion that builds minimal telemetry from existing
stats) and ensure record_skipped/record continue to call record(...) for
attempt/skips as before.
---
Nitpick comments:
In `@src/core/algorithms/flips.rs`:
- Around line 5123-5143: The silent continue in collect_cells_around_ridge (the
loop that skips when a neighbor cell is missing) can mask broken adjacency;
update that site to either return an explicit error (propagated up to callers
like repair_delaunay_local_single_pass/finalize_bulk_construction) or emit a
debug-level trace with context (ridge id, missing neighbor key) before
continuing so diagnostics are available; ensure this change composes with
verify_local_repair_postcondition/verify_repair_postcondition_with_topology
which uses deferred connectivity during finalize_bulk_construction and does not
assume missing neighbors are benign.
In `@tests/large_scale_debug.rs`:
- Around line 884-981: The new always-on telemetry dump in
print_construction_telemetry (and the similar sections around the
slow_insertions dump) must be gated behind the test diagnostics feature: wrap
the detailed output blocks (the body that calls
print_construction_phase_telemetry, print_timing_summary,
print_local_repair_phase_telemetry, print_local_repair_frontier_telemetry,
print_local_repair_work_telemetry, print_repair_seed_accumulation_telemetry and
the global_conflict_scans printing) in a cfg(feature = "diagnostics")
conditional (or an equivalent test-only gate) so that these summaries only emit
when the "diagnostics" feature is enabled; leave the early has_data() check and
minimal behavior unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 6573e2e2-65b2-4812-adf5-077702eae3fe
📒 Files selected for processing (15)
- README.md
- docs/dev/debug_env_vars.md
- docs/invariants.md
- docs/workflows.md
- justfile
- src/core/algorithms/flips.rs
- src/core/cell.rs
- src/core/facet.rs
- src/core/triangulation.rs
- src/lib.rs
- src/triangulation.rs
- src/triangulation/delaunay.rs
- src/triangulation/diagnostics.rs
- tests/large_scale_debug.rs
- tests/prelude_exports.rs
✅ Files skipped from review due to trivial changes (4)
- docs/invariants.md
- docs/dev/debug_env_vars.md
- README.md
- docs/workflows.md
🚧 Files skipped from review as they are similar to previous changes (3)
- justfile
- src/triangulation.rs
- src/lib.rs
- Report dangling ridge-neighbor references as a typed flip error instead of silently skipping missing cells during ridge-star collection.
- Preserve the computed insertion repair requirement when callers provide conflict cells, avoiding unnecessary Delaunay repair for clean cavities.
- Gate large-scale debug telemetry output behind diagnostics and document the Homebrew uv path for Codex validation shells.

- Retain bounded slow local-repair samples with insertion index, trigger, seed frontier size, checked work, flips, queue size, and phase timing.
- Expose the repair trigger and sample types through the diagnostics prelude.
- Print slow local-repair samples in the large-scale debug harness.
- Archive the N=1 repair performance plan for #341.

- Replace full TDS validation after Delaunay repair flips with a local cavity validation path that checks affected cells, vertex mappings, neighbor links, facet multiplicity, and orientation consistency.
- Keep full validation for generic flip APIs and final/global safety checks.
- Document the 3K and 10K #341 measurements and identify post-insertion topology validation as the next hotspot.

- Validate post-insertion topology and orientation over touched cell frontiers when the mutation scope is known.
- Keep strict PL-manifold and unknown-scope paths on global required-link validation.
- Report stale local validation scopes as typed CellNotFound errors.
- Document the 3K and 10K timing progress for the N=1 repair plan.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/topology/manifold.rs (1)
123-152: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift
Avoid fabricating lifted periodic IDs as synthetic `VertexKey`s.

This mapping is not collision-free: two different `(vertex, offset)` pairs can alias each other, and they can also alias a real `VertexKey`. If that happens, the link/ridge graph silently merges distinct periodic images and manifold validation can return the wrong answer. Please use a dedicated lifted-vertex identifier for graph keys instead of re-encoding them into slotmap keys.
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/topology/manifold.rs` around lines 123 - 152, The lifted_vertex_id function is fabricating periodic-image IDs as real VertexKey/KeyData values which can collide with other lifted pairs or actual VertexKey entries; replace this by introducing a dedicated lifted-vertex identifier type (e.g., LiftedVertexId or LiftedVertexKey) and use that as the graph key instead of returning VertexKey from lifted_vertex_id, update all call sites that currently rely on lifted_vertex_id (search for lifted_vertex_id, VertexKey::from, KeyData::from_ffi and uses in graph/ridge/link construction and manifold validation) to accept and store the new type, and remove any logic that inserts or interprets these synthetic values into slotmap/VertexKey storage so real VertexKey space remains untouched.
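A dedicated lifted-vertex identifier could be as simple as the following sketch (the type names and the 3-axis offset are assumptions, not the crate's actual periodic representation): keying the graph by the full `(vertex, offset)` pair makes aliasing structurally impossible, since no bit-packing into the `VertexKey` space is involved.

```rust
// Key the lifted link/ridge graph by a dedicated (vertex, offset) identifier
// instead of re-encoding the pair into slotmap VertexKey space.
use std::collections::HashSet;

type VertexKey = u64;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct LiftedVertexId {
    vertex: VertexKey,
    offset: [i8; 3], // periodic image offset per axis
}

fn main() {
    let mut graph_keys = HashSet::new();
    // The same base vertex in two periodic images stays distinct, and
    // neither can collide with a different vertex's image.
    assert!(graph_keys.insert(LiftedVertexId { vertex: 7, offset: [0, 0, 0] }));
    assert!(graph_keys.insert(LiftedVertexId { vertex: 7, offset: [1, 0, 0] }));
    assert!(graph_keys.insert(LiftedVertexId { vertex: 8, offset: [0, 0, 0] }));
    // Re-inserting an existing lifted image is correctly detected as a dup.
    assert!(!graph_keys.insert(LiftedVertexId { vertex: 7, offset: [1, 0, 0] }));
    assert_eq!(graph_keys.len(), 3);
}
```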
🧹 Nitpick comments (1)
docs/archive/issue_341_n1_repair_plan.md (1)
31-98: ⚡ Quick win | Add reproducibility metadata to each benchmark block.
These timing snapshots will be much easier to trust and compare later if each section includes commit SHA, hardware/CPU, build profile, and the exact command/env used.
Proposed doc patch:

 ### 3K 3D, `N=1`
+_Benchmark context:_
+- Commit: `<sha>`
+- Host: `<cpu / cores / ram>`
+- Build/profile: `<cargo profile + rustc version>`
+- Command/env: `<exact command + relevant DELAUNAY_* env vars>`
+
 #### Before Scoped Post-Insertion Topology Validation
@@
 ### 10K 3D, `N=1`
+_Benchmark context:_
+- Commit: `<sha>`
+- Host: `<cpu / cores / ram>`
+- Build/profile: `<cargo profile + rustc version>`
+- Command/env: `<exact command + relevant DELAUNAY_* env vars>`
+
 #### Before Scoped Post-Insertion Topology Validation

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@docs/archive/issue_341_n1_repair_plan.md` around lines 31 - 98, The benchmark sections (e.g., "3K 3D, `N=1`" and "10K 3D, `N=1`" and their three blocks "Before Scoped Post-Insertion Topology Validation", "After Scoped Post-Insertion Topology Validation", "After Local Insertion Orientation Validation") lack reproducibility metadata; add a short metadata header to each benchmark block that includes commit SHA, hardware/CPU model, build profile (e.g., release/debug, compiler flags), and the exact command and environment variables used to run the benchmark (shell command and any relevant ENV like OMP_NUM_THREADS), ensuring the metadata is placed immediately above the timing list for each of those blocks so readers can compare results reliably.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/core/tds.rs`:
- Around line 3599-3620: The helper currently proceeds after computing
mirror_idx without verifying the neighbor's back-reference, allowing
one-way/dangling frontier edges to be accepted; update the helper (the block
using mirror_idx, neighbor_cell, cell, and facet_idx) to check that
neighbor_cell.neighbors()[mirror_idx] equals Some(cell_key) (or otherwise
confirms the back-reference to cell) and if not return an
Err(TdsError::InvalidNeighbors { ... }) (matching the global
validate_coherent_orientation behavior) before computing
facet_vertices_in_cell_order and facet_permutation_parity so scoped adjacencies
cannot silently accept invalid one-way neighbors.
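The requested back-reference check can be sketched with simplified stand-in types; `Cell`, `CellKey`, and the error variant shape here are hypothetical, mirroring the `neighbor_cell.neighbors()[mirror_idx]` check the prompt describes rather than the crate's real `TdsError`.

```rust
type CellKey = usize;

struct Cell {
    // neighbors[i] is the cell across the facet opposite vertex i.
    neighbors: Vec<Option<CellKey>>,
}

#[derive(Debug, PartialEq)]
enum TdsError {
    InvalidNeighbors { cell: CellKey, neighbor: CellKey },
}

/// Accept the frontier adjacency only if the neighbor points back at this
/// cell through the mirror facet index; otherwise reject it as a one-way
/// (dangling) edge instead of silently validating it.
fn check_back_reference(
    cells: &[Cell],
    cell_key: CellKey,
    neighbor_key: CellKey,
    mirror_idx: usize,
) -> Result<(), TdsError> {
    let back = cells[neighbor_key].neighbors.get(mirror_idx).copied().flatten();
    if back != Some(cell_key) {
        return Err(TdsError::InvalidNeighbors { cell: cell_key, neighbor: neighbor_key });
    }
    Ok(())
}

fn main() {
    // Coherent pair: cell 0 <-> cell 1 across facet 2 / mirror facet 0.
    let ok = vec![
        Cell { neighbors: vec![None, None, Some(1)] },
        Cell { neighbors: vec![Some(0), None, None] },
    ];
    assert!(check_back_reference(&ok, 0, 1, 0).is_ok());

    // One-way edge: the neighbor's mirror slot does not point back.
    let broken = vec![
        Cell { neighbors: vec![None, None, Some(1)] },
        Cell { neighbors: vec![None, None, None] },
    ];
    assert!(check_back_reference(&broken, 0, 1, 0).is_err());
}
```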
In `@src/core/triangulation.rs`:
- Around line 5331-5341: The returned repair_seed_cells currently only collects
new_cells via append_live_unique_cell_seeds, which omits the actual surviving
frontier produced by local facet repair (neighbor_repair_frontier); update the
code that builds repair_seed_cells before returning CavityInsertionOutcome so it
also appends the neighbor_repair_frontier (using append_live_unique_cell_seeds
or equivalent) so the post-repair frontier is included for follow-up validation
and Delaunay repair; make the same change in the other analogous location
handling CavityInsertionOutcome.
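The seed-collection fix described above can be sketched with an `append_live_unique_cell_seeds`-style helper; the function names and `CellKey` alias are hypothetical, and the "liveness" filtering of the real helper is omitted, but the shape shows how the repair frontier is appended alongside the new cells without duplicating seeds.

```rust
use std::collections::HashSet;

type CellKey = u64;

/// Extend the seed list with cells not yet seen, preserving insertion order.
fn append_unique_seeds(seeds: &mut Vec<CellKey>, seen: &mut HashSet<CellKey>, cells: &[CellKey]) {
    for &c in cells {
        if seen.insert(c) {
            seeds.push(c);
        }
    }
}

/// Build the repair seeds returned with a cavity-insertion outcome.
fn collect_repair_seeds(new_cells: &[CellKey], neighbor_repair_frontier: &[CellKey]) -> Vec<CellKey> {
    let mut seeds = Vec::new();
    let mut seen = HashSet::new();
    append_unique_seeds(&mut seeds, &mut seen, new_cells);
    // The fix: also carry the surviving post-repair frontier, not just the
    // new cells, so follow-up validation and Delaunay repair cover it.
    append_unique_seeds(&mut seeds, &mut seen, neighbor_repair_frontier);
    seeds
}

fn main() {
    // Cell 3 appears in both inputs but is seeded only once.
    let seeds = collect_repair_seeds(&[1, 2, 3], &[3, 4]);
    assert_eq!(seeds, vec![1, 2, 3, 4]);
}
```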
- Around line 4372-4385: The telemetry only records topology validation when
validation_after_insertion_work() returns
InsertionValidationWork::FullValidation, but
validate_after_insertion_with_scope() also performs real link/coherence work for
the RequiredTopologyLinks path; update the conditional(s) that currently match
(Some(InsertionValidationWork::FullValidation), Some(validation_started)) to
also treat InsertionValidationWork::RequiredTopologyLinks as a counted
validation variant (i.e., match both FullValidation and RequiredTopologyLinks),
and call Self::record_topology_validation_telemetry(telemetry,
Self::duration_nanos_saturating(validation_started.elapsed())) in those cases;
apply the same change to the other analogous block that checks for
FullValidation (the second occurrence mentioned).
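The counting fix above can be sketched with a minimal telemetry recorder; the enum and struct here are simplified stand-ins for the crate's `InsertionValidationWork` and telemetry types. The key point is matching both variants that do real validation work, rather than only `FullValidation`.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum InsertionValidationWork {
    Skipped,
    RequiredTopologyLinks,
    FullValidation,
}

#[derive(Default, Debug)]
struct Telemetry {
    topology_validation_calls: u64,
    topology_validation_nanos: u64,
}

/// Count every variant that performs real validation work:
/// RequiredTopologyLinks also runs link/coherence checks, so it must be
/// recorded alongside FullValidation.
fn record_validation(telemetry: &mut Telemetry, work: InsertionValidationWork, nanos: u64) {
    if matches!(
        work,
        InsertionValidationWork::FullValidation | InsertionValidationWork::RequiredTopologyLinks
    ) {
        telemetry.topology_validation_calls += 1;
        telemetry.topology_validation_nanos =
            telemetry.topology_validation_nanos.saturating_add(nanos);
    }
}

fn main() {
    let mut t = Telemetry::default();
    record_validation(&mut t, InsertionValidationWork::Skipped, 10);
    record_validation(&mut t, InsertionValidationWork::RequiredTopologyLinks, 20);
    record_validation(&mut t, InsertionValidationWork::FullValidation, 30);
    assert_eq!(t.topology_validation_calls, 2); // Skipped is not counted
    assert_eq!(t.topology_validation_nanos, 50);
}
```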
In `@src/topology/manifold.rs`:
- Around line 483-485: The local pseudomanifold path currently uses bare
VertexKey-based helpers (cell_facet_vertices, facet_incident_handles, and the
facet_to_cells map) so periodic images collapse; update these helpers and all
local validator usages (including validate_local_pseudomanifold_for_cells and
the other occurrences around the indicated ranges) to operate on
lifted/periodic-aware keys: compute facet vertex lists as lifted vertex
identifiers (use the same lifting logic used by the ridge-link path), produce
periodic-aware facet keys, and make facet_incident_handles return incident
handles keyed by those periodic facet keys before inserting into facet_to_cells;
ensure all comparisons and hash/eq uses in
validate_local_pseudomanifold_for_cells and the other noted blocks use the
periodic-key type so distinct images do not merge.
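The periodic-aware keying described above can be sketched by building `facet_to_cells` over sorted tuples of lifted vertex identities; `LiftedVertex`, the tuple representation, and the map shape are hypothetical simplifications of the crate's handles, but they show why two periodic images of the same base facet stay distinct.

```rust
use std::collections::HashMap;

/// Hypothetical lifted vertex identity: base vertex plus periodic offset.
#[derive(Clone, Copy, PartialEq, Eq, Hash, PartialOrd, Ord, Debug)]
struct LiftedVertex {
    vertex: u64,
    offset: [i8; 3],
}

type CellKey = usize;

/// Map each facet (given as its lifted vertices) to the cells incident to
/// it, keyed by the sorted lifted tuple so orientation does not split keys
/// but distinct periodic images do not merge.
fn facet_to_cells(
    facets: &[(CellKey, Vec<LiftedVertex>)],
) -> HashMap<Vec<LiftedVertex>, Vec<CellKey>> {
    let mut map: HashMap<Vec<LiftedVertex>, Vec<CellKey>> = HashMap::new();
    for (cell, facet) in facets {
        let mut key = facet.clone();
        key.sort(); // canonical vertex order for the facet key
        map.entry(key).or_default().push(*cell);
    }
    map
}

fn main() {
    let v = |vertex, offset| LiftedVertex { vertex, offset };
    // Same base vertices {0, 1}, but two different periodic images:
    let facets = vec![
        (0, vec![v(0, [0, 0, 0]), v(1, [0, 0, 0])]),
        (1, vec![v(0, [1, 0, 0]), v(1, [1, 0, 0])]),
    ];
    let map = facet_to_cells(&facets);
    // Under bare VertexKey keys these would collapse into one entry;
    // lifted keys keep them separate.
    assert_eq!(map.len(), 2);
}
```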
---
Outside diff comments:
In `@src/topology/manifold.rs`:
- Around line 123-152: The lifted_vertex_id function is fabricating
periodic-image IDs as real VertexKey/KeyData values which can collide with other
lifted pairs or actual VertexKey entries; replace this by introducing a
dedicated lifted-vertex identifier type (e.g., LiftedVertexId or
LiftedVertexKey) and use that as the graph key instead of returning VertexKey
from lifted_vertex_id, update all call sites that currently rely on
lifted_vertex_id (search for lifted_vertex_id, VertexKey::from,
KeyData::from_ffi and uses in graph/ridge/link construction and manifold
validation) to accept and store the new type, and remove any logic that inserts
or interprets these synthetic values into slotmap/VertexKey storage so real
VertexKey space remains untouched.
---
Nitpick comments:
In `@docs/archive/issue_341_n1_repair_plan.md`:
- Around line 31-98: The benchmark sections (e.g., "3K 3D, `N=1`" and "10K 3D,
`N=1`" and their three blocks "Before Scoped Post-Insertion Topology
Validation", "After Scoped Post-Insertion Topology Validation", "After Local
Insertion Orientation Validation") lack reproducibility metadata; add a short
metadata header to each benchmark block that includes commit SHA, hardware/CPU
model, build profile (e.g., release/debug, compiler flags), and the exact
command and environment variables used to run the benchmark (shell command and
any relevant ENV like OMP_NUM_THREADS), ensuring the metadata is placed
immediately above the timing list for each of those blocks so readers can
compare results reliably.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 967d0fa6-6c6d-408d-8689-9d68c99875c4
📒 Files selected for processing (4)
- docs/archive/issue_341_n1_repair_plan.md
- src/core/tds.rs
- src/core/triangulation.rs
- src/topology/manifold.rs
- Preserve flip-created repair frontiers after full-reseed repair attempts so ridge-link checks can stay local when the mutation scope is known.
- Harden scoped topology validation by rejecting one-way neighbor references and carrying repair frontier cells into follow-up Delaunay repair seeds.
- Use dedicated lifted periodic vertex identities in manifold validation so periodic images never collide with real TDS vertex keys.
- Add reproducibility metadata for the recorded #341 large-scale benchmark runs.
🧹 Nitpick comments (3)
src/core/triangulation.rs (3)
4209-4238: ⚡ Quick win | Document the scoped-validation contract here.
This helper is the soundness boundary for the new local-validation path, but the new docs stop short of stating why the scoped checks are sufficient and when they intentionally fall back to global validation. Please add a numbered `REFERENCES.md` citation plus a short conditioning note for the local-vs-global invariant tradeoff.

As per coding guidelines "Algorithms must cite their source (Shewchuk, Bowyer-Watson, Edelsbrunner, etc.) in REFERENCES.md and document conditioning behaviour."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/core/triangulation.rs` around lines 4209 - 4238, Add a concise scoped-validation contract comment above the function validate_required_topology_links_for_cells that states the soundness boundary: list the exact invariants checked (coherent orientation, local pseudomanifold, ridge-links, geometric cell orientation), explain that these local checks are sufficient only when topology_guarantee does not require vertex links and cells is non-empty, and document the explicit fallback condition that calls validate_required_topology_links() when vertex-link checks are required or when cells is empty. Also add a numbered citation in REFERENCES.md (e.g., "1. [author/source]") referencing the relevant algorithmic source(s) (e.g., Bowyer-Watson, Edelsbrunner, Shewchuk) and include a short note in the function comment pointing to that REFERENCES.md entry describing the conditioning and tradeoff between local vs global validation.
9937-9964: ⚡ Quick win | Add the positive telemetry regression too.

This only locks in the 0-call case. The risk in this PR was the opposite path: `RequiredTopologyLinks` doing real work without incrementing `topology_validation_calls`. Please add one `PLManifold` clean-insertion assertion that expects the counter to increment.

As per coding guidelines "Unit tests must cover known values, error paths, and dimension-generic correctness."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/core/triangulation.rs` around lines 9937 - 9964, Add a positive telemetry regression: besides the existing test triangulation_policy_skipped_validation_does_not_increment_telemetry, add (or extend it with) a case where tri.set_topology_guarantee(TopologyGuarantee::PLManifold) (or use the PLManifold guarantee) and perform a clean insertion via insert_with_statistics_seeded_indexed_detailed so that the insertion does real topology work (the RequiredTopologyLinks path); then assert that detail.telemetry.topology_validation_calls has incremented (non-zero) for that PLManifold insertion. Use the same Triangulation, ValidationPolicy::OnSuspicion, and insert_with_statistics_seeded_indexed_detailed call patterns to mirror the existing test but assert a positive counter for the PLManifold case.
7543-7564: ⚡ Quick win | Extend these scoped-validation regressions beyond 3D.

The new insertion/scoped-validation path is generic over `D`, but the added regressions only lock in the behavior in 3D. I'd add `pastey` coverage through at least `D=5` for one happy-path scoped-validation test and one strict fallback test so high-dimensional drift shows up before merge.

As per coding guidelines "Unit tests must cover known values, error paths, and dimension-generic correctness; dimension-generic tests must cover D=2 through D=5 whenever feasible" and "Use 'pastey' macros to generate per-dimension tests for dimension-generic correctness."
Also applies to: 7590-7628
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/core/triangulation.rs` around lines 7543 - 7564, Add dimension-generic pasted tests for D=2..=5 mirroring the existing 3D regressions: create paste-generated test cases for both the "scoped strict fallback" (based on test_scoped_strict_validation_falls_back_to_global_vertex_links) and a "happy-path scoped-validation" test, parameterizing Triangulation::<FastKernel<f64>, (), (), D>::new_with_tds(...), validate_after_insertion_with_scope(...), SuspicionFlags::default(), ValidationPolicy::OnSuspicion and TopologyGuarantee::PLManifoldStrict; adapt or add generic fixture builders (e.g., build_invalid_vertex_link_tds_<D> or a generic build_invalid_vertex_link_tds::<D>()) so each pasted test uses the correct expected_vertex_key for that dimension, and replicate the same match/assert logic per-dimension to ensure coverage for D=2..=5 (also apply the same paste expansion to the other related test block around the second reported range).
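The per-dimension expansion pattern requested above can be sketched without the `pastey` crate. In the crate itself, `pastey::paste!` would synthesize identifiers like `scoped_validation_3d` from a dimension parameter; this standalone sketch uses plain `macro_rules!` with explicit names to show the same one-body-many-dimensions expansion, with a trivial stand-in body instead of a real triangulation fixture.

```rust
/// Generate one check function per dimension from a single body. With
/// `pastey`, the names would be derived (e.g. `[<scoped_validation_ $d d>]`);
/// here they are spelled out explicitly.
macro_rules! dim_checks {
    ($($name:ident => $d:expr),* $(,)?) => {
        $(
            fn $name() {
                // Stand-in for the real per-dimension regression body:
                // build a D-dimensional fixture and run scoped validation.
                let d: usize = $d;
                assert!((2..=5).contains(&d), "coverage target is D=2 through D=5");
            }
        )*
        fn run_all_dim_checks() {
            $( $name(); )*
        }
    };
}

dim_checks! {
    scoped_validation_2d => 2,
    scoped_validation_3d => 3,
    scoped_validation_4d => 4,
    scoped_validation_5d => 5,
}

fn main() {
    run_all_dim_checks();
}
```

In the real test module each generated function would carry `#[test]` and invoke the dimension-generic fixture builder for its `D`, so a regression in any dimension fails as its own named test.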
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Nitpick comments:
In `@src/core/triangulation.rs`:
- Around line 4209-4238: Add a concise scoped-validation contract comment above
the function validate_required_topology_links_for_cells that states the
soundness boundary: list the exact invariants checked (coherent orientation,
local pseudomanifold, ridge-links, geometric cell orientation), explain that
these local checks are sufficient only when topology_guarantee does not require
vertex links and cells is non-empty, and document the explicit fallback
condition that calls validate_required_topology_links() when vertex-link checks
are required or when cells is empty. Also add a numbered citation in
REFERENCES.md (e.g., "1. [author/source]") referencing the relevant algorithmic
source(s) (e.g., Bowyer-Watson, Edelsbrunner, Shewchuk) and include a short note
in the function comment pointing to that REFERENCES.md entry describing the
conditioning and tradeoff between local vs global validation.
- Around line 9937-9964: Add a positive telemetry regression: besides the
existing test
triangulation_policy_skipped_validation_does_not_increment_telemetry, add (or
extend it with) a case where
tri.set_topology_guarantee(TopologyGuarantee::PLManifold) (or use the PLManifold
guarantee) and perform a clean insertion via
insert_with_statistics_seeded_indexed_detailed so that the insertion does real
topology work (the RequiredTopologyLinks path); then assert that
detail.telemetry.topology_validation_calls has incremented (non-zero) for that
PLManifold insertion. Use the same Triangulation, ValidationPolicy::OnSuspicion,
and insert_with_statistics_seeded_indexed_detailed call patterns to mirror the
existing test but assert a positive counter for the PLManifold case.
- Around line 7543-7564: Add dimension-generic pasted tests for D=2..=5
mirroring the existing 3D regressions: create paste-generated test cases for
both the "scoped strict fallback" (based on
test_scoped_strict_validation_falls_back_to_global_vertex_links) and a
"happy-path scoped-validation" test, parameterizing
Triangulation::<FastKernel<f64>, (), (), D>::new_with_tds(...),
validate_after_insertion_with_scope(...), SuspicionFlags::default(),
ValidationPolicy::OnSuspicion and TopologyGuarantee::PLManifoldStrict; adapt or
add generic fixture builders (e.g., build_invalid_vertex_link_tds_<D> or a
generic build_invalid_vertex_link_tds::<D>()) so each pasted test uses the
correct expected_vertex_key for that dimension, and replicate the same
match/assert logic per-dimension to ensure coverage for D=2..=5 (also apply the
same paste expansion to the other related test block around the second reported
range).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: a4506964-c4f0-46cd-afa6-bfae1423f8ed
📒 Files selected for processing (6)
- docs/archive/issue_341_n1_repair_plan.md
- src/core/algorithms/flips.rs
- src/core/tds.rs
- src/core/triangulation.rs
- src/topology/manifold.rs
- src/triangulation/delaunay.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- src/core/tds.rs
- src/core/algorithms/flips.rs